Efficient Covariance Matrix Methods for Bayesian Gaussian Processes and Hopfield Neural Networks
Authors
Abstract
Covariance matrices are important in many areas of neural modelling. In Hopfield networks they are used to form the weight matrix which controls the autoassociative properties of the network. In Gaussian processes, which have been shown to be the infinite-neuron limit of many regularised feedforward neural networks, covariance matrices control the form of the Bayesian prior distribution over function space. This thesis examines interesting modifications to the standard covariance matrix methods to increase the functionality or efficiency of these neural techniques. Firstly, the problem of adapting Gaussian process priors to perform regression on switching regimes is tackled; this involves the use of block covariance matrices and Gibbs sampling methods. Then the use of Toeplitz methods is proposed for Gaussian process regression where sampling positions can be chosen. A comparison is made between Hopfield weight matrices and sample covariances, which allows work on sample covariances to be used to estimate the eigenvalue structure of standard Hopfield weight matrices. This structure enables the development of efficient incremental and local learning techniques for Hopfield networks. A new Hopfield learning rule is introduced, and it is shown both empirically and mathematically that such learning methods give a higher capacity than the usual Hebb rule. Furthermore, this rule is better able to deal with the storage of correlated patterns. The basins of attraction of this learning rule are examined empirically and found to be larger, rounder and more evenly distributed than those of the Hebb rule. Lastly, iterated function sequence methods are used to show that the learning rule acts as a palimpsest (or forgetful) learning rule and does not suffer from catastrophic forgetting.
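The link between Hopfield weight matrices and sample covariances that the abstract draws on can be made concrete. Below is a minimal NumPy sketch, not the thesis's own code: the network size, pattern count and noise level are illustrative. For zero-mean ±1 patterns, the standard Hebb weight matrix is proportional to the sample second-moment matrix of the stored patterns, with self-couplings removed:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 10                       # neurons, stored patterns (illustrative sizes)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebb rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T with zero self-couplings.
# For zero-mean +/-1 patterns this is a scaled sample second-moment matrix.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# One synchronous update step from a noisy probe of pattern 0.
probe = patterns[0] * rng.choice([1, -1], size=N, p=[0.9, 0.1])
state = np.sign(W @ probe)
state[state == 0] = 1
print("overlap with stored pattern:", state @ patterns[0] / N)
```

An overlap near 1.0 indicates the probe has fallen back into the stored pattern's basin of attraction, the property the thesis's capacity and basin-size analyses quantify.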
Similar Resources
Efficient Implementation of Gaussian Processes
Neural networks and Bayesian inference provide a useful framework within which to solve regression problems. However, their parameterization means that the Bayesian analysis of neural networks can be difficult. In this paper, we investigate a method for regression using Gaussian process priors which allows exact Bayesian analysis using matrix manipulations. We discuss the workings of the method in...
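The "exact Bayesian analysis using matrix manipulations" referred to here is the standard Gaussian process predictive equation. A minimal sketch follows; the kernel choice, noise level and toy data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential covariance k(a, b) on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Toy 1-D regression data (illustrative).
X = np.linspace(0, 5, 20)
y = np.sin(X) + 0.1 * np.random.default_rng(1).standard_normal(20)
Xs = np.linspace(0, 5, 100)
noise = 0.1 ** 2

# Exact posterior mean and covariance: nothing but matrix manipulations.
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
mean = Ks.T @ np.linalg.solve(K, y)
cov = rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
```

The O(n^3) solve against K is the computational bottleneck that motivates the Toeplitz and truncated-covariance methods in the entries below.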
Truncated Covariance Matrices and Toeplitz Methods in Gaussian Processes
Gaussian processes are a limit extension of neural networks. Standard Gaussian process techniques use a squared exponential covariance function. Here, the use of truncated covariances is proposed. Such covariances have compact support. Their use speeds up matrix inversion and increases precision. Furthermore, they allow the use of speedy, memory-efficient Toeplitz inversion for high dimensional g...
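When the covariance is stationary and the sampling positions form a regular grid, the Gram matrix is Toeplitz, so it is defined by its first column alone and can be solved by Levinson-type recursions without forming or inverting the full n-by-n matrix. A minimal sketch under those assumptions, with an illustrative compact-support (truncated) covariance standing in for the paper's specific choice:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def truncated_cov(r, radius=1.0):
    """Illustrative compact-support covariance: exactly zero beyond `radius`."""
    r = np.abs(r)
    return np.where(r < radius, (1.0 - r / radius) ** 2, 0.0)

# Evenly spaced inputs make K stationary and Toeplitz: K[i, j] = c(|i - j| * h).
n, h = 500, 0.01
X = np.arange(n) * h
y = np.sin(2 * np.pi * X) + 0.05 * np.random.default_rng(2).standard_normal(n)

col = truncated_cov(X - X[0])      # first column defines the whole matrix
col[0] += 0.05 ** 2                # observation noise on the diagonal
alpha = solve_toeplitz(col, y)     # Levinson solve: O(n^2) time, O(n) storage
```

Compact support means `col` is mostly zeros, which is what yields the additional speed and memory savings the abstract mentions.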
Simultaneous Monitoring of Multivariate-Attribute Process Mean and Variability Using Artificial Neural Networks
In some statistical process control applications, the quality of the product is characterized by the combination of both correlated variable and attribute quality characteristics. In this paper, we propose a novel control scheme based on the combination of two multi-layer perceptron neural networks for simultaneous monitoring of the mean vector as well as the covariance matrix in multivariate-attribu...
Deep Neural Networks as Gaussian Processes
A deep fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP) in the limit of infinite network width. This correspondence enables exact Bayesian inference for neural networks on regression tasks by means of straightforward matrix computations. For single hidden-layer networks, the covariance function of this GP has long been known. Recent...
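The covariance function of the infinite-width GP can be written as a layer-wise recursion; for ReLU activations each step has a closed form (the arc-cosine kernel of Cho & Saul). The sketch below is a generic version of that recursion, not code from the paper; the weight and bias variances and the depth are illustrative:

```python
import numpy as np

def nngp_relu(x1, x2, depth=3, sw2=1.6, sb2=0.1):
    """NNGP kernel for a deep ReLU network via the layer-wise recursion."""
    # Layer 0: affine kernel from the raw inputs.
    k12 = sb2 + sw2 * np.dot(x1, x2) / len(x1)
    k11 = sb2 + sw2 * np.dot(x1, x1) / len(x1)
    k22 = sb2 + sw2 * np.dot(x2, x2) / len(x2)
    for _ in range(depth):
        theta = np.arccos(np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0))
        # ReLU arc-cosine map: E[relu(u) relu(v)] under the current Gaussian.
        k12 = sb2 + sw2 / (2 * np.pi) * np.sqrt(k11 * k22) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        k11 = sb2 + sw2 * k11 / 2    # E[relu(u)^2] = k11 / 2
        k22 = sb2 + sw2 * k22 / 2
    return k12
```

Filling a Gram matrix with this kernel and applying the standard GP predictive equations gives the "exact Bayesian inference by straightforward matrix computations" the abstract describes.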
Deep Bayesian Neural Nets as Deep Matrix Gaussian Processes
We show that by employing a distribution over random matrices, the matrix variate Gaussian (Gupta & Nagar, 1999), for the neural network parameters, we can obtain a non-parametric interpretation for the hidden units after the application of the "local reparametrization trick" (Kingma et al., 2015). This provides a nice duality between Bayesian neural networks and deep Gaussian processes Damianou...
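A matrix variate Gaussian MN(M, U, V) couples a row covariance U with a column covariance V; a draw can be built from i.i.d. standard normals via Cholesky factors of the two covariances. A minimal sketch of this standard construction, with illustrative shapes and parameters:

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng):
    """Draw X ~ MN(M, U, V), i.e. vec(X) ~ N(vec(M), V kron U)."""
    A = np.linalg.cholesky(U)           # U = A A^T (row covariance factor)
    B = np.linalg.cholesky(V)           # V = B B^T (column covariance factor)
    Z = rng.standard_normal(M.shape)    # i.i.d. standard normal entries
    return M + A @ Z @ B.T

rng = np.random.default_rng(3)
n_in, n_out = 4, 3                      # illustrative weight-matrix shape
U = np.eye(n_in) + 0.2                  # toy row covariance (positive definite)
V = 0.5 * np.eye(n_out)                 # toy column covariance
W = sample_matrix_normal(np.zeros((n_in, n_out)), U, V, rng)
```

Using such a draw for a weight matrix, rather than fully factorized Gaussian weights, is what lets the row and column covariances play the role of GP covariances over the hidden units.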
Publication date: 1999